An Empirical Study of the Use of Integrity Verification Mechanisms for Web Subresources
Web developers can (and do) include subresources such as scripts, stylesheets, and images in their webpages. Such subresources might be stored on remote servers such as content delivery networks (CDNs). This practice creates security and privacy risks, should a subresource be corrupted, as was recently the case for the British Airways websites. The subresource integrity (SRI) recommendation, released in mid-2016 by the W3C, enables developers to include digests in their webpages so that web browsers can verify the integrity of subresources before loading them. In this paper, we conduct the first large-scale longitudinal study of the use of SRI on the Web by analyzing massive crawls (3B unique URLs) of the Web over the last 3.5 years. Our results show that the adoption of SRI is modest (3.40%), but grows at an increasing rate and is highly influenced by the practices of popular library developers (e.g., Bootstrap) and CDN operators (e.g., jsDelivr). We complement our analysis of SRI with a survey of web developers (N = 227): it shows that a substantial proportion of developers know SRI and understand its basic functioning, but most of them ignore important aspects of the specification, such as the case of malformed digests. The results of the survey also show that the integration of SRI by developers is mostly manual, hence not scalable and error-prone. This calls for a better integration of SRI in build tools.
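The digest mechanism described above can be sketched in a few lines. Per the W3C SRI recommendation, an integrity value is the name of a hash algorithm followed by a dash and the base64-encoded digest of the resource's bytes; a minimal Python sketch (the function name and example input are illustrative, not from the paper):

```python
import base64
import hashlib

def sri_digest(content: bytes, algo: str = "sha384") -> str:
    """Compute an SRI-style integrity value for a resource:
    '<algo>-<base64(hash(content))>', as placed in the `integrity`
    attribute of a <script> or <link> tag."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Example: integrity value for a (hypothetical) script file's bytes.
script_bytes = b"console.log('hello');"
print(sri_digest(script_bytes))
```

A browser fetching the subresource recomputes the hash over the received bytes and refuses to load the resource if it does not match the declared value, which is what protects against a corrupted CDN copy.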
Evaluating the End-User Experience of Private Browsing Mode
Nowadays, all major web browsers have a private browsing mode. However, the
mode's benefits and limitations are not well understood. Through the
use of survey studies, prior work has found that most users are either unaware
of private browsing or do not use it. Further, those who do use private
browsing generally have misconceptions about what protection it provides.
However, prior work has not investigated why users misunderstand the
benefits and limitations of private browsing. In this work, we do so by
designing and conducting a three-part study: (1) an analytical approach
combining cognitive walkthrough and heuristic evaluation to inspect the user
interface of private mode in different browsers; (2) a qualitative,
interview-based study to explore users' mental models of private browsing and
its security goals; (3) a participatory design study to investigate why
existing browser disclosures, the in-browser explanations of private browsing
mode, do not communicate the security goals of private browsing to users.
Participants critiqued the browser disclosures of three web browsers (Brave,
Firefox, and Google Chrome) and then designed new ones. We find that the user
interface of private mode in different web browsers violates several
well-established design guidelines and heuristics. Further, most participants
had incorrect mental models of private browsing, influencing their
understanding and usage of private mode. Additionally, we find that existing
browser disclosures are not only vague, but also misleading. None of the three
studied browser disclosures communicates or explains the primary security goal
of private browsing. Drawing from the results of our user study, we extract a
set of design recommendations that we encourage browser designers to validate,
in order to design more effective and informative browser disclosures related
to private mode.
SoK: hate, harassment, and the changing landscape of online abuse
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks, such as toxic content and surveillance, that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment are a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.